Intelligent Parsing: An Automated Parsing Framework for Extracting Design Semantics from E-commerce Creatives
In the industrial e-commerce landscape, creative designs such as banners and posters are ubiquitous. Extracting structured semantic information from creative e-commerce design materials (manuscripts crafted by designers) to obtain design semantics is a core challenge in intelligent design. In this paper, we propose a comprehensive automated framework for intelligently parsing creative materials. The framework comprises material-recognition, preprocessing, smart-naming, and labeling layers. The material-recognition layer consolidates various detection and recognition interfaces, covering business tasks such as detecting auxiliary areas within creative materials, layer-level detection, and label identification; algorithmically, it employs a variety of coarse-to-fine methods, including Cascade R-CNN and GFL. The preprocessing layer filters creative layers and grades creative materials. The smart-naming layer generates names for creative materials, while the labeling layer applies multi-level tags at different hierarchical levels. Intelligent parsing thus constitutes a complete parsing framework that significantly aids downstream processes such as intelligent creation, creative optimization, and material-library construction. In practical business applications at Suning, it markedly enhances the exposure, circulation, and click-through rates of creative materials, expediting the closed-loop production of creative materials and yielding substantial benefits.
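The four-stage pipeline described in the abstract can be sketched as a chain of simple functions. Everything here is an illustrative assumption — the function names, the grading rule, and the tag values are not the paper's API; the real framework runs detection models (Cascade R-CNN, GFL) in the recognition stage.

```python
# Hypothetical sketch of the four-layer parsing pipeline:
# material recognition -> preprocessing -> smart naming -> labeling.
# All names and rules are illustrative assumptions, not the paper's API.

def recognize_materials(creative):
    """Material-recognition layer: in the real system this runs detectors
    (e.g. Cascade R-CNN, GFL) for auxiliary areas, layers, and labels."""
    return {"layers": creative.get("layers", []), "labels": []}

def preprocess(recognized):
    """Preprocessing layer: filter out empty layers and grade the material."""
    layers = [l for l in recognized["layers"] if l.get("area", 0) > 0]
    grade = "A" if len(layers) >= 3 else "B"   # assumed toy grading rule
    return {"layers": layers, "grade": grade}

def smart_name(pre):
    """Smart-naming layer: derive a readable name from layer contents."""
    return "-".join(l.get("type", "layer") for l in pre["layers"]) or "empty"

def tag(pre):
    """Labeling layer: attach coarse-to-fine, multi-level tags."""
    return {"level1": "banner", "level2": pre["grade"]}

def parse_creative(creative):
    """Run the full parsing chain over one creative material."""
    pre = preprocess(recognize_materials(creative))
    return {"name": smart_name(pre), "tags": tag(pre), "grade": pre["grade"]}
```

The point of the sketch is the staged structure: each layer consumes the previous layer's output, so stages can be swapped or upgraded independently.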
UI Layers Merger: Merging UI Layers via Visual Learning and Boundary Prior
Chen, Yun-nong, Zhen, Yan-kun, Shi, Chu-ning, Li, Jia-zhi, Chen, Liu-qing, Li, Ze-jian, Sun, Ling-yun, Zhou, Ting-ting, Chang, Yan-fang
With the fast-growing GUI development workload in the Internet industry, some work on intelligent methods has attempted to generate maintainable front-end code from UI screenshots. It can be more suitable to utilize UI design drafts, which contain UI metadata. However, fragmented layers inevitably appear in UI design drafts, which greatly reduces the quality of code generation, and none of the existing GUI automation techniques detects and merges fragmented layers to improve the accessibility of the generated code. In this paper, we propose UI Layers Merger (UILM), a vision-based method that automatically detects and merges fragmented layers into UI components. UILM consists of a Merging Area Detector (MAD) and a layers-merging algorithm. MAD incorporates boundary prior knowledge to accurately detect the boundaries of UI components; the layers-merging algorithm then searches out the associated layers within a component's boundary and merges them into a whole part. We present a dynamic data-augmentation approach to boost the performance of MAD, and we construct a large-scale UI dataset for training MAD and testing the performance of UILM. Experiments show that the proposed method outperforms the best baseline on merging-area detection and achieves decent accuracy on layers merging.
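The layers-merging step lends itself to a short sketch: given component boundaries predicted by MAD, collect the layers whose boxes fall inside each boundary and merge them into one part. The box format, tolerance, and grouping rule below are assumptions for illustration, not the paper's exact algorithm.

```python
# Illustrative sketch of merging fragmented layers inside detected
# component boundaries. Boxes are (x1, y1, x2, y2); `tol` is an assumed
# pixel tolerance for near-boundary layers.

def inside(inner, outer, tol=2):
    """True if box `inner` lies within box `outer`, up to `tol` pixels."""
    return (inner[0] >= outer[0] - tol and inner[1] >= outer[1] - tol and
            inner[2] <= outer[2] + tol and inner[3] <= outer[3] + tol)

def merge_box(boxes):
    """Union of a list of boxes: the merged component's bounding box."""
    xs1, ys1, xs2, ys2 = zip(*boxes)
    return (min(xs1), min(ys1), max(xs2), max(ys2))

def merge_layers(layer_boxes, component_boundaries):
    """For each detected boundary, group contained layers and merge them.

    Returns a list of (layer_indices, merged_box) pairs; each layer is
    assigned to at most one component.
    """
    merged, used = [], set()
    for comp in component_boundaries:
        group = [i for i, b in enumerate(layer_boxes)
                 if i not in used and inside(b, comp)]
        if len(group) > 1:            # only merge truly fragmented groups
            used.update(group)
            merged.append((group, merge_box([layer_boxes[i] for i in group])))
    return merged
```

In this toy form the quality of the result rests entirely on the detected boundaries, which matches the abstract's emphasis on MAD's boundary accuracy.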
ULDGNN: A Fragmented UI Layer Detector Based on Graph Neural Networks
Li, Jiazhi, Zhou, Tingting, Chen, Yunnong, Chang, Yanfang, Zhen, Yankun, Sun, Lingyun, Chen, Liuqing
While some works attempt to generate front-end code intelligently from UI screenshots, it may be more convenient to utilize UI design drafts in Sketch, a popular UI design tool, because multimodal UI information such as layer type, position, size, and visual images can be accessed directly. However, if fragmented layers are involved in code generation without being merged into a whole part, they degrade the quality of the code. In this paper, we propose a pipeline to merge fragmented layers automatically. We first construct a graph representation of the layer tree of a UI draft and detect all fragmented layers based on visual features and graph neural networks; a rule-based algorithm is then designed to merge the fragmented layers. In experiments on a newly constructed dataset, our approach retrieves most fragmented layers in UI design drafts and achieves 87% accuracy on the detection task, and the post-processing algorithm clusters associated layers under simple and general circumstances.
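The first step of the pipeline above — turning a nested layer tree into a graph a GNN can consume — can be sketched minimally. The tree format, the node features, and the edge rule (undirected parent-child links) are assumptions for illustration; the paper's actual features include visual images of layers.

```python
# Minimal sketch of flattening a UI layer tree into (node_features, edges)
# for graph-neural-network input. The input format is an assumed nested
# dict with "frame" (x, y, w, h), "type", and "children".

def layer_tree_to_graph(root):
    """Flatten a nested layer dict into a feature list and an edge list."""
    feats, edges = [], []

    def visit(node, parent_idx):
        idx = len(feats)
        x, y, w, h = node.get("frame", (0, 0, 0, 0))
        # toy node features: geometry plus a hashed layer-type id
        feats.append([x, y, w, h, hash(node.get("type", "")) % 97])
        if parent_idx is not None:
            edges.append((parent_idx, idx))
            edges.append((idx, parent_idx))   # undirected message passing
        for child in node.get("children", []):
            visit(child, idx)

    visit(root, None)
    return feats, edges
```

A GNN classifier would then score each node as fragmented or not, after which the rule-based merging step clusters the flagged layers.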